Fairness in Natural Language Processing

Grant number: DP200102519 | Funding period: 2020–2023

Completed

Abstract

Natural language processing (NLP) has achieved spectacular commercial successes in recent years and has been deployed across an ever-increasing breadth of devices and application areas. At the same time, there is stark evidence that naively trained models amplify biases in their training data and perform inconsistently across text relating to different demographic groups. This project aims to systematically quantify the extent of such biases, and to develop models that are both more socially equitable and less prone to exposing private data in their learned representations. In doing so, it will make NLP more accessible to new populations of users, and remove …

